Results 1 - 5 of 5
1.
Animals (Basel) ; 13(18)2023 Sep 21.
Article in English | MEDLINE | ID: mdl-37760384

ABSTRACT

Monitoring the drinking behavior of animals can provide important information for livestock farming, including on the health and well-being of the animals. Measuring drinking time is labor-intensive and thus remains a challenge in most livestock production systems. Computer vision technology using a low-cost camera system can help overcome this issue. The aim of this research was to develop a computer vision system for monitoring beef cattle drinking behavior. A data acquisition system, including an RGB camera and an ultrasonic sensor, was developed to record beef cattle drinking actions. We developed an algorithm for tracking the beef cattle's key body parts, such as the head-ear-neck position, using DeepLabCut, a state-of-the-art deep learning architecture for pose estimation. The extracted key points were analyzed with a long short-term memory (LSTM) model to classify drinking and non-drinking periods. A total of 70 videos were used to train and test the model, and 8 videos were used for validation. During testing, the model achieved 97.35% accuracy. The results of this study can help meet immediate monitoring needs and expand farmers' capability to monitor animal health and well-being by identifying drinking behavior.
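A minimal sketch of the keypoint-to-LSTM classification step described above, assuming DeepLabCut exports (x, y) coordinates for the three tracked key points per frame; the window length, layer sizes, and dummy data are illustrative, not the authors' settings:

```python
import numpy as np
from tensorflow.keras import layers, models

N_KEYPOINTS = 3                 # head, ear, neck, as tracked in the paper
N_FEATURES = N_KEYPOINTS * 2    # (x, y) per key point and frame
WINDOW = 30                     # frames per sequence; assumed window length

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.LSTM(64),                        # summarize head motion over time
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # drinking vs. non-drinking
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# X: stacked keypoint windows cut from DeepLabCut output; y: 0/1 bout labels.
# Random arrays stand in for real data here.
X = np.random.rand(16, WINDOW, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=16)
model.fit(X, y, epochs=1, verbose=0)
```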

2.
Animals (Basel) ; 13(15)2023 Jul 27.
Article in English | MEDLINE | ID: mdl-37570235

ABSTRACT

Feeding behavior is one of the critical welfare indicators of broilers. Hence, understanding feeding behavior can provide important information on the usage of poultry resources and insights for farm management. Monitoring poultry behavior is typically based on visual human observation. Despite the successful application of this method, its implementation in large poultry farms takes time and effort, so automated approaches are needed. Consequently, this study aimed to evaluate the feeding time of individual broilers with a convolutional neural network-based model. To achieve this goal, 1500 images collected from a poultry farm were labeled for training the You Only Look Once (YOLO) model to detect the broilers' heads. A Euclidean distance-based tracking algorithm was also developed to track the detected heads. The developed algorithm estimated each broiler's feeding time by recognizing whether its head was inside the feeder. Three 1-min labeled videos were used to evaluate the proposed algorithm's performance. The algorithm estimated the feeding time of each broiler per visit to the feeding pan with an overall accuracy of 87.3%. The obtained results indicate that the proposed algorithm can be used as a real-time tool in poultry farms.
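The abstract describes head detection, Euclidean distance tracking, and a head-in-feeder test. A minimal sketch of the latter two steps, assuming the YOLO detector yields one head-center coordinate per bird per frame; the feeder region, gating threshold, and frame rate are assumptions:

```python
import math

FEEDER_BOX = (100, 100, 400, 300)   # hypothetical feeder region (x1, y1, x2, y2)
FPS = 25                            # assumed camera frame rate

def inside_feeder(x, y, box=FEEDER_BOX):
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def update_tracks(tracks, detections, feed_frames, max_dist=50.0):
    """Greedy nearest-neighbor association of detected head centers with
    existing tracks by Euclidean distance, counting frames each tracked
    head spends inside the feeder region."""
    unmatched = list(detections)
    for tid, (tx, ty) in list(tracks.items()):
        if not unmatched:
            break
        dist, best = min(
            ((math.hypot(dx - tx, dy - ty), (dx, dy)) for dx, dy in unmatched),
            key=lambda pair: pair[0],
        )
        if dist <= max_dist:
            tracks[tid] = best
            unmatched.remove(best)
            if inside_feeder(*best):
                feed_frames[tid] = feed_frames.get(tid, 0) + 1
    for det in unmatched:            # unmatched heads start new tracks
        tracks[max(tracks, default=0) + 1] = det
    return tracks, feed_frames

# Feeding time in seconds per broiler would then be feed_frames[tid] / FPS.
```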

3.
J Food Sci ; 87(1): 289-301, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34940977

ABSTRACT

Homogeneity of appearance attributes of bell peppers is essential for consumers and the food industry. This research aimed to develop an in-line sorting system for grading bell peppers into five classes using a deep convolutional neural network (DCNN), the state of the art in machine vision-based classification. According to export standards, the crop should be graded by maturity stage and size. To that end, the fully connected layer of the ResNet50 DCNN architecture was replaced with a purpose-built classifier block comprising a global average pooling layer, dense layers, batch normalization, and a dropout layer. The developed model was trained and evaluated with five-fold cross-validation. The processing time to classify each sample with the proposed model was estimated at 4 ms, fast enough for real-time applications. Accordingly, the DCNN model was integrated into a machine vision-based sorting machine, and the developed system was evaluated in-line. The in-line accuracy, precision, sensitivity, specificity, F1-score, and overall accuracy were 98.7%, 97%, 96.9%, 99%, 96.9%, and 96.9%, respectively. The total sorting rate was measured at approximately 3000 samples/h with one sorting line. The proposed sorting system demonstrates very good capability for industrial applications. PRACTICAL APPLICATION: A developed intelligent model was integrated into a machine vision-based sorting machine for bell peppers. The developed system can sort the crop according to export criteria with an accuracy of 96.9%, which makes it suitable for industrial use.


Subjects
Capsicum; Neural Networks, Computer
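A minimal sketch of the head replacement this record describes: ResNet50's fully connected top swapped for a classifier block of global average pooling, dense, batch normalization, and dropout layers ending in a five-way softmax. The input size, hidden width, and dropout rate are assumptions, not the authors' values:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

# Assumption: 224x224 RGB inputs and the five grading classes from the abstract.
base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3))

# Classifier block replacing the fully connected top:
# GAP -> dense -> batch norm -> dropout -> softmax.
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)   # hidden width is assumed
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.5)(x)                    # dropout rate is assumed
outputs = layers.Dense(5, activation="softmax")(x)

model = models.Model(base.input, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy", metrics=["accuracy"])
```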
4.
Plants (Basel) ; 10(7)2021 Jul 09.
Article in English | MEDLINE | ID: mdl-34371609

ABSTRACT

Timely seed variety recognition is critical to limit qualitative and quantitative yield losses and asynchronous crop production. The conventional method is subjective and error-prone, since it relies on human experts and usually requires accredited seed material. This paper presents a convolutional neural network (CNN) framework for automatic identification of chickpea varieties from seed images in the visible spectrum (400-700 nm). Two low-cost devices were employed for image acquisition, with variable lighting and imaging conditions (background, focus, angle, and camera-to-sample distance). The VGG16 architecture was modified with a global average pooling layer, dense layers, a batch normalization layer, and a dropout layer. The obtained model was able to distinguish the intricate visual features of the diverse chickpea varieties and recognize them accordingly. Five-fold cross-validation was performed to evaluate the uncertainty and predictive efficiency of the CNN model. The modified deep learning model recognized the different chickpea seed varieties with an average classification accuracy of over 94%. In addition, the proposed vision-based model was very robust in seed variety identification and independent of image acquisition device, light environment, and imaging settings. This opens the avenue for extension into novel applications in which mobile phones acquire and process information in situ. The proposed procedure offers possibilities for deployment in the seed industry and in mobile applications for fast and robust automated seed identification.
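This record and the next both evaluate a modified VGG16 with five-fold cross-validation. A minimal sketch of that protocol, with the input size, class count, hidden width, dropout rate, and dummy arrays all standing in as assumptions for the real seed data:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Dummy stand-ins for the seed images and variety labels.
X = np.random.rand(100, 224, 224, 3).astype("float32")
y = np.random.randint(0, 4, size=100)      # 4 varieties is an assumption

def build_model(n_classes=4):
    """Modified VGG16 as described in the abstract: global average pooling,
    dense, batch normalization, and dropout layers. Widths/rates assumed."""
    base = VGG16(weights="imagenet", include_top=False,
                 input_shape=(224, 224, 3))
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

scores = []
for train_idx, test_idx in StratifiedKFold(
        n_splits=5, shuffle=True, random_state=0).split(X, y):
    model = build_model()                   # fresh model per fold
    model.fit(X[train_idx], y[train_idx], epochs=1, verbose=0)
    scores.append(model.evaluate(X[test_idx], y[test_idx], verbose=0)[1])
print(f"mean cross-validation accuracy: {np.mean(scores):.3f}")
```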

5.
Plants (Basel) ; 10(8)2021 Aug 08.
Article in English | MEDLINE | ID: mdl-34451673

ABSTRACT

Extending over millennia, grapevine cultivation encompasses several thousand cultivars. Cultivar (cultivated variety) identification is traditionally handled by ampelography, which requires repeated expert observations along the growth cycle of fruiting plants. For timely evaluation, molecular genetic methods have been applied successfully, though in many instances they are limited by a lack of reference data or by cost. This paper presents a convolutional neural network (CNN) framework for automatic identification of grapevine cultivars from leaf images in the visible spectrum (400-700 nm). The VGG16 architecture was modified with a global average pooling layer, dense layers, a batch normalization layer, and a dropout layer. The obtained model was able to distinguish the intricate visual features of the diverse grapevine varieties and recognize them accordingly. Five-fold cross-validation was performed to evaluate the uncertainty and predictive efficiency of the CNN model. The modified deep learning model recognized the different grapevine varieties with an average classification accuracy of over 99%. The obtained model offers rapid, low-cost, and high-throughput grapevine cultivar identification. The ambition of the tool is not to substitute for but to complement ampelography and quantitative genetics, and in this way to assist cultivar identification services.
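Once such a model is trained (see the cross-validation sketch after the previous record), cultivar prediction from a single leaf photograph reduces to a few lines. In this sketch the saved model file, the cultivar-name list, and the image path are all hypothetical placeholders, not artifacts published with the paper:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import preprocess_input

# Hypothetical artifacts: a saved copy of the modified VGG16 and the
# cultivar names in training order.
model = tf.keras.models.load_model("grapevine_vgg16.h5")
cultivars = ["cultivar_a", "cultivar_b", "cultivar_c"]

# Load one leaf photo, resize to the assumed input size, and preprocess
# the pixel values the same way as during VGG16 training.
img = tf.keras.utils.load_img("leaf.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), 0))
probs = model.predict(x)[0]
print(cultivars[int(np.argmax(probs))], f"{probs.max():.2%}")
```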
